
    Advantages and limitations of formal ontologies in the biomedical domain

    We propose a typology of representational artefacts for health care and the life sciences, and associate this typology with different kinds of formal and logical ontology, drawing conclusions about the strengths and limitations of ontologies of different logical types, while keeping the focus on description logics. We consider four types of domain representation: (i) lexico-semantic representation, (ii) representation of types of entities, (iii) representation of background knowledge, and (iv) representation of individuals. We argue for a clear distinction between these four kinds of representation, in order to provide a more rational basis for the use of ontologies and related artefacts in advancing data integration and the interoperability of associated reasoning systems. We stress that only a small portion of the scientifically relevant facts in a domain such as biomedicine can be adequately represented by formal ontologies, when the latter are conceived as representations of entity types. In particular, attempts to encode default or probabilistic knowledge using ontologies so conceived are bound to produce unintended, erroneous models.
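The abstract's warning about default knowledge can be made concrete with the textbook "penguin" example from description logic (our illustration, not the article's): encoding the default "birds fly" as a universal subsumption axiom makes every exception unsatisfiable.

```latex
\mathit{Bird} \sqsubseteq \mathit{Flies} \qquad
\mathit{Penguin} \sqsubseteq \mathit{Bird} \qquad
\mathit{Penguin} \sqsubseteq \lnot\mathit{Flies}
```

From these axioms a reasoner derives $\mathit{Penguin} \sqsubseteq \mathit{Flies} \sqcap \lnot\mathit{Flies}$, i.e. $\mathit{Penguin} \equiv \bot$: the class of penguins is forced to be empty, which is exactly the kind of unintended model the abstract describes.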

    OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4

    BACKGROUND: Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. OBJECTIVE: We provide a plugin for the Protégé ontology editor that allows easy checks of compliance with ontology naming conventions and metadata completeness, as well as curation of any violations found. IMPLEMENTATION: In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigated the capabilities needed for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies, and based on these test results it was refined and new functionality was integrated. RESULTS: The new Protégé plugin, OntoCheck, allows ontology tests to be carried out on OWL ontologies. In particular, OntoCheck helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Detected violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints such as name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts, and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. CONCLUSIONS: The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers.
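To make the idea of such checks concrete, here is a minimal sketch of naming-convention and metadata-completeness checking in the spirit of OntoCheck. The pattern (lowercase, space-separated words) and the required metadata fields are assumptions chosen for illustration; in the plugin itself such constraints are user-configurable, not built in.

```python
import re

# Illustrative defaults, not OntoCheck's actual rules: a label pattern
# of lowercase space-separated words, and two required metadata fields.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9]*( [a-z0-9]+)*$")
REQUIRED_METADATA = {"label", "definition"}

def check_class(label: str, metadata: dict) -> list:
    """Return human-readable violations for one ontology class."""
    violations = []
    if not NAME_PATTERN.match(label):
        violations.append(f"label '{label}' violates the naming pattern")
    missing = REQUIRED_METADATA - set(metadata)
    if missing:
        violations.append(f"missing metadata: {', '.join(sorted(missing))}")
    return violations

# A compliant class yields no violations; a CamelCase class with no
# definition yields two.
print(check_class("heart valve", {"label": "heart valve", "definition": "..."}))
print(check_class("HeartValve", {"label": "HeartValve"}))
```

Storing such patterns as data (here a regex plus a field set) is what makes them exchangeable between projects, as the abstract describes for OntoCheck's check constraints.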

    Validating archetypes for the Multiple Sclerosis Functional Composite

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet received sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to the necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
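As a toy illustration of constraint-based validation in the spirit of archetypes (real openEHR archetypes are written in ADL against the openEHR reference model, which this sketch does not attempt to model), the code below checks a flat record of the three Multiple Sclerosis Functional Composite measures against min/max ranges. The field names and bounds are invented for the example.

```python
# Hypothetical constraints on the three MSFC component tests
# (timed 25-foot walk, 9-hole peg test, PASAT); ranges are illustrative.
CONSTRAINTS = {
    "timed_25_foot_walk_seconds": (0.0, 180.0),
    "nine_hole_peg_test_seconds": (0.0, 300.0),
    "pasat_correct_answers": (0, 60),
}

def validate(record: dict) -> list:
    """Check a flat data record against min/max constraints."""
    errors = []
    for field, (lo, hi) in CONSTRAINTS.items():
        if field not in record:
            errors.append(f"{field}: missing")
        elif not (lo <= record[field] <= hi):
            errors.append(f"{field}: {record[field]} outside [{lo}, {hi}]")
    return errors

print(validate({"timed_25_foot_walk_seconds": 6.2,
                "nine_hole_peg_test_seconds": 21.5,
                "pasat_correct_answers": 55}))
```

Archetypes play an analogous role: they constrain which data elements may appear and what values they may take, while the reference model supplies the underlying data structures.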

    Telemedicine in Intensive Care Units: Protocol for a Scoping Review

    Background: Telemedicine has been deployed to address issues in intensive care delivery, as well as to improve outcomes and quality of care. Implementation of this technology has been characterized by high variability. Tele-intensive care unit (ICU) interventions involve the combination of multiple technological and organizational components, as well as interconnections of key stakeholders inside the hospital organization. The extensive literature on the benefits of tele-ICUs has been characterized as heterogeneous. On the one hand, positive clinical and economic outcomes have been shown in multiple studies. On the other hand, no tangible benefits could be detected in several cases. This could be due to the diverse forms of organizations and the fact that tele-ICU interventions are complex to evaluate. The implementation context of tele-ICUs has been shown to play an important role in the success of the technology. The benefits derived from tele-ICUs depend on the organization in which they are deployed and on how the telemedicine systems are applied. There is therefore value in analyzing the benefits of tele-ICUs in relation to the characteristics of the organizations in which they are deployed. To date, research on the topic has not provided a comprehensive overview of the literature taking both the technology setup and the implementation context into account. Objective: We present a protocol for a scoping review of the literature on telemedicine in the ICU and its benefits in intensive care. The purpose of this review is to map out evidence about telemedicine in critical care in light of the implementation context. This review could represent a valuable contribution to support the development of tele-ICU technologies and offer perspectives on possible configurations, based on the implementation context and use case.
Methods: We have followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist and the recommendations of the Joanna Briggs Institute methodology for scoping reviews. The scoping review and subsequent systematic review will be completed by spring 2021. Results: The preliminary search has been conducted. After removing all duplicates, we found 2530 results. The review can now be advanced to the next steps of the methodology, including literature database queries with appropriate keywords, retrieval of the results in a reference management tool, and screening of titles and abstracts. Conclusions: The results of the search indicate that there is sufficient literature to complete the scoping review. Upon completion, the scoping review will provide a map of existing evidence on tele-ICU systems given the implementation context. Findings of this research could be used by researchers, clinicians, and implementation teams as they determine the appropriate setup of new or existing tele-ICU systems. The need for future research contributions and systematic reviews will be identified. International Registered Report Identifier (IRRID): DERR1-10.2196/1969
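The duplicate-removal step reported above can be sketched as follows. The matching keys (DOI when present, otherwise a normalised title) are our assumption for illustration; the protocol itself delegates deduplication to a reference management tool.

```python
def dedupe(records):
    """Keep the first occurrence of each reference, matching on DOI when
    present and otherwise on a whitespace-normalised lowercase title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical search hits: one DOI duplicate, one title-only duplicate.
hits = [
    {"title": "Tele-ICU outcomes", "doi": "10.1000/x1"},
    {"title": "TELE-ICU Outcomes", "doi": "10.1000/x1"},
    {"title": "ICU telemedicine review"},
    {"title": "icu  telemedicine  review"},
]
print(len(dedupe(hits)))
```

Title normalisation alone is fragile (punctuation, subtitles), which is one reason identifier-based matching in dedicated reference managers is preferred for real reviews.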

    Telemedicine in Intensive Care Units: Scoping Review

    Background: The role of telemedicine in intensive care has been increasing steadily. Tele-intensive care unit (ICU) interventions are varied and can be used in different levels of treatment, often with direct implications for the intensive care processes. Although a substantial body of primary and secondary literature has been published on the topic, there is a need for broadening the understanding of the organizational factors influencing the effectiveness of telemedical interventions in the ICU. Objective: This scoping review aims to provide a map of existing evidence on tele-ICU interventions, focusing on the analysis of the implementation context and identifying areas for further technological research. Methods: A research protocol outlining the method has been published in JMIR Research Protocols. This review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews). A core research team was assembled to provide feedback and discuss findings. Results: A total of 3019 results were retrieved. After screening, 25 studies were included in the final analysis. We were able to characterize the context of tele-ICU studies and identify three use cases for tele-ICU interventions. The first use case is extending coverage, which describes interventions aimed at extending the availability of intensive care capabilities. The second use case is improving compliance, which includes interventions targeted at improving patient safety, intensive care best practices, and quality of care. The third use case, facilitating transfer, describes telemedicine interventions targeted toward the management of patient transfers to or from the ICU. Conclusions: The benefits of tele-ICU interventions have been well documented for centralized systems aimed at extending critical care capabilities in a community setting and improving care compliance in tertiary hospitals. 
No strong evidence has been found for a reduction in patient transfers following tele-ICU interventions.

    Integrating clinical decision support systems for pharmacogenomic testing into clinical routine - a scoping review of designs of user-system interactions in recent system development

    Background: Pharmacogenomic clinical decision support systems (CDSS) have the potential to help overcome some of the barriers to translating pharmacogenomic knowledge into clinical routine. Before developing a prototype, it is crucial for developers to know which pharmacogenomic CDSS features and user-system interactions have already been developed, implemented and tested in previous pharmacogenomic CDSS efforts, and whether they have been successfully applied. We address this issue by providing an overview of the designs of user-system interactions of recently developed pharmacogenomic CDSS. Methods: We searched PubMed for pharmacogenomic CDSS published between January 1, 2012 and November 15, 2016. Thirty-two out of 118 identified articles were summarized and included in the final analysis. We then compared the designs of user-system interactions of the 20 pharmacogenomic CDSS we had identified. Results: Alerts are the most widespread tools for physician-system interactions, but need to be implemented carefully to prevent alert fatigue and avoid liabilities. Pharmacogenomic test results and override reasons stored in the local EHR might help communicate pharmacogenomic information to other internal care providers. Integrating patients into user-system interactions through patient letters and online portals might be crucial for transferring pharmacogenomic data to external health care providers. Inbox messages inform physicians about new pharmacogenomic test results and enable them to request pharmacogenomic consultations. Search engines enable physicians to compare medical treatment options based on a patient's genotype. Conclusions: Within the last 5 years, several pharmacogenomic CDSS have been developed. However, most of the included articles solely describe prototypes of pharmacogenomic CDSS rather than evaluating them. To support the development of prototypes, further evaluation efforts will be necessary. In the future, pharmacogenomic CDSS will likely include prediction models to identify patients who are suitable for preemptive genotyping.
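A minimal sketch of the alert-style physician-system interaction discussed above. The single rule encodes the well-known CYP2C19/clopidogrel interaction (poor metabolizers activate the prodrug less effectively); a real CDSS would draw on curated guideline tables such as CPIC's rather than a hard-coded dict, and the function names here are our own.

```python
# One illustrative rule: (drug, gene, phenotype) -> alert text.
ALERT_RULES = {
    ("clopidogrel", "CYP2C19", "poor metabolizer"):
        "Reduced activation of clopidogrel; "
        "consider an alternative antiplatelet agent.",
}

def check_prescription(drug: str, phenotypes: dict) -> list:
    """Return alert texts triggered by prescribing `drug` to a patient
    with the given gene -> phenotype mapping."""
    return [msg for (d, gene, pheno), msg in ALERT_RULES.items()
            if d == drug and phenotypes.get(gene) == pheno]

print(check_prescription("clopidogrel", {"CYP2C19": "poor metabolizer"}))
print(check_prescription("clopidogrel", {"CYP2C19": "normal metabolizer"}))
```

Keeping rules in a data table rather than in code is one way to limit alert fatigue: rules can be reviewed, ranked, and pruned without redeploying the system.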